Network Optimization via Smooth Exact Penalty Functions Enabled by Distributed Gradient Computation

Authors

Abstract

This article proposes a distributed algorithm for a network of agents to solve an optimization problem with a separable objective function and locally coupled constraints. Our strategy is based on reformulating the original constrained problem as the unconstrained optimization of a smooth (continuously differentiable) exact penalty function. Computing the gradient of this penalty function in a distributed way is challenging even under the separability assumptions on the problem. Our technical approach shows that the gradient computation can be formulated as a system of linear algebraic equations defined by separable problem data. To solve it, we design an exponentially fast, input-to-state stable distributed algorithm that does not require the individual agent matrices to be invertible. We employ this strategy to compute the gradient of the penalty function at the current network state. Our distributed algorithmic solver interconnects this gradient estimation with the prescription of having the agents follow the resulting descent direction. Numerical simulations illustrate the convergence and robustness properties of the proposed algorithm.
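
To make the interplay between gradient estimation and the descent prescription concrete, the following is a minimal centralized sketch, not the authors' distributed method, for the special case of a linearly constrained problem: minimize f(x) subject to Ax = b. It uses a Fletcher-style smooth penalty whose multiplier estimate comes from a linear system solved by a simple gradient iteration (a stand-in for the exponentially fast, input-to-state stable distributed solver described above); the function names, step sizes, and the neglect of the multiplier estimate's own derivative are illustrative simplifications.

```python
import numpy as np

# Minimal centralized sketch (illustrative, not the article's algorithm) of the
# two interacting pieces the abstract describes, for the linearly constrained
# case: minimize f(x) subject to A x = b.  A Fletcher-style smooth penalty uses
# a multiplier estimate lam(x) defined by the linear system
#   (A A^T) lam = -A grad_f(x),
# solved here by a plain gradient iteration; its output then drives a descent
# step on the penalty.  The derivative of lam(x) itself is ignored for brevity.

def solve_linear_iteratively(M, q, iters=500, tau=None):
    """Approximate the solution of M z = q via z <- z - tau*(M z - q)."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(M, 2)      # conservative step for symmetric PSD M
    z = np.zeros_like(q)
    for _ in range(iters):
        z -= tau * (M @ z - q)
    return z

def penalty_descent(x0, grad_f, A, b, rho=10.0, alpha=1e-2, outer_iters=300):
    """Gradient-type descent on a smooth penalty for min f(x) s.t. Ax = b."""
    x = x0.copy()
    for _ in range(outer_iters):
        lam = solve_linear_iteratively(A @ A.T, -A @ grad_f(x))  # multiplier estimate
        grad_phi = grad_f(x) + A.T @ lam + rho * A.T @ (A @ x - b)
        x -= alpha * grad_phi
    return x

# Example: separable quadratic objective f(x) = 0.5*||x||^2 with one coupling row.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.0])
x_star = penalty_descent(np.zeros(3), lambda x: x, A, b)
print(x_star)   # approaches [1, 1, 1]
```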


Related articles

Smooth Exact Penalty and Barrier Functions for Nonsmooth Optimization

For constrained nonsmooth optimization problems, continuously differentiable penalty functions and barrier functions are given. They are proved exact in the sense that, under some nondegeneracy assumption, local optimizers of a nonlinear program are also optimizers of the associated penalty or barrier function. This is achieved by augmenting the dimension of the program by a variable that control...


First- and Second-Order Necessary Conditions Via Exact Penalty Functions

In this paper we study first- and second-order necessary conditions for nonlinear programming problems from the viewpoint of exact penalty functions. By applying the variational description of regular subgradients, we first establish necessary and sufficient conditions for a penalty term to be of KKT type by using the regular subdifferential of the penalty term. In terms of the kernel of the subd...


Globalizing Stabilized SQP by Smooth Primal-Dual Exact Penalty Function

An iteration of the stabilized sequential quadratic programming method (sSQP) consists in solving a certain quadratic program in the primal-dual space, regularized in the dual variables. The advantage with respect to the classical sequential quadratic programming (SQP) is that no constraint qualifications are required for fast local convergence (i.e., the problem can be degenerate). In particul...
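
As a rough sketch of the dual regularization mentioned above, one common way to write a stabilized SQP step for an equality-constrained problem reduces to a single linear system whose lower-right block carries the regularization parameter. The symbols H, J, and sigma below are illustrative notation and the construction is a generic textbook form, not necessarily the paper's.

```python
import numpy as np

# Hedged sketch of a single stabilized-SQP step for min f(x) s.t. h(x) = 0.
# The quadratic subproblem, regularized in the dual variables, reduces to a
# linear system with a -sigma*I block; H is a Hessian model, J the constraint
# Jacobian, lam the current multiplier estimate.

def ssqp_step(grad_f, h_val, H, J, lam, sigma):
    """Return the primal step d and the updated multiplier lam_plus."""
    n, m = H.shape[0], J.shape[0]
    K = np.block([[H, J.T],
                  [J, -sigma * np.eye(m)]])
    rhs = np.concatenate([-grad_f, -h_val - sigma * lam])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]

# Example: f(x) = 0.5*||x||^2, h(x) = x1 + x2 - 2, at x = 0 with lam = 0.
d, lam_plus = ssqp_step(grad_f=np.array([0.0, 0.0]), h_val=np.array([-2.0]),
                        H=np.eye(2), J=np.array([[1.0, 1.0]]),
                        lam=np.array([0.0]), sigma=1e-4)
print(d, lam_plus)   # d close to [1, 1], multiplier close to -1
```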


Steering Exact Penalty Methods for Optimization

This paper reviews, extends and analyzes a new class of penalty methods for nonlinear optimization. These methods adjust the penalty parameter dynamically; by controlling the degree of linear feasibility achieved at every iteration, they promote balanced progress toward optimality and feasibility. In contrast with classical approaches, the choice of the penalty parameter ceases to be a heuristi...
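
As a rough illustration of the dynamic parameter update described above, the snippet below grows the penalty parameter whenever the measured constraint violation has not decreased by a chosen factor. The thresholds and growth factor are hypothetical placeholders, not the steering rule analyzed in the paper.

```python
# Schematic of the dynamic penalty-parameter idea (a plausible rule, not the
# paper's exact steering test): grow rho whenever the last step did not reduce
# the constraint violation by a target factor, so that progress toward
# feasibility keeps pace with progress toward optimality.

def update_penalty_parameter(rho, viol_new, viol_old,
                             target_reduction=0.5, growth=10.0):
    """Return an updated penalty parameter based on observed infeasibility."""
    if viol_new > target_reduction * viol_old:
        return growth * rho      # infeasibility stalled: penalize harder
    return rho                   # enough progress: keep the current parameter

# Example: the violation only dropped from 1.0 to 0.9, so rho is increased.
print(update_penalty_parameter(rho=1.0, viol_new=0.9, viol_old=1.0))  # 10.0
```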


Augmented Lagrangian methods with smooth penalty functions

Since the late 1990s, the interest in augmented Lagrangian methods has been revived, and several models with smooth penalty functions for programs with inequality constraints have been proposed, tested and used in a variety of applications. Global convergence results for some of these methods have been published. Here we present a local convergence analysis for a large class of smooth augmented...
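
For context, a classical augmented Lagrangian with a continuously differentiable penalty term for inequality constraints (the Powell–Hestenes–Rockafellar form) can be sketched as below. This is only one representative of the broad class the paper refers to; the inner solver, parameter values, and example problem are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hedged sketch of a PHR-type augmented Lagrangian for
#   minimize f(x)  subject to  g(x) <= 0,
# whose penalty term max(0, mu + rho*g(x))^2 is continuously differentiable
# in x.  The smooth penalties treated in the paper may differ in detail.

def augmented_lagrangian(f, g, x0, rho=10.0, outer_iters=20):
    x, mu = np.asarray(x0, dtype=float), np.zeros(len(g(x0)))
    for _ in range(outer_iters):
        def L(x):
            t = np.maximum(0.0, mu + rho * g(x))
            return f(x) + (np.sum(t**2) - np.sum(mu**2)) / (2.0 * rho)
        x = minimize(L, x).x                      # smooth inner subproblem
        mu = np.maximum(0.0, mu + rho * g(x))     # multiplier update
    return x, mu

# Example: min (x1-2)^2 + (x2-1)^2  s.t.  x1 + x2 <= 2  ->  x close to (1.5, 0.5)
x, mu = augmented_lagrangian(lambda x: (x[0] - 2)**2 + (x[1] - 1)**2,
                             lambda x: np.array([x[0] + x[1] - 2.0]),
                             x0=[0.0, 0.0])
print(x, mu)
```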



Journal

Journal title: IEEE Transactions on Control of Network Systems

Year: 2021

ISSN: 2325-5870, 2372-2533

DOI: https://doi.org/10.1109/tcns.2021.3068361